Author: Néstor Espinoza (nespinoza@stsci.edu) | Latest update: August 23, 2020.
This notebook is part of a series of notebooks that are being prepared by STScI in order to showcase how to simulate, process and analyze JWST observations for a wide range of science cases. Here, we touch on the transiting exoplanet observations science case and, in particular, on spectrophotometric observations of the primary transit of an exoplanet. During primary transit, the observed flux decrease due to the planet blocking light from the stellar host is proportional to $\delta = (R_p/R_*)^2$ --- a quantity known as the transit depth, where $R_p$ is the planetary radius and $R_*$ is the stellar radius. Interestingly, the transit depth is wavelength dependent; i.e., $\delta \equiv \delta (\lambda)$. This is because opacity sources on the planetary atmosphere absorb different amounts of light at different wavelengths and, therefore, the observed planetary radius $R_p$ --- and thus occulted area during transit, $\delta$ --- is wavelength-dependent (see, e.g., Kreidberg 2018 for a review). This technique, referred to as transmission spectroscopy in what follows, aims at obtaining those transit depths as a function of wavelength in JWST observations through the study of ultra-precise transit lightcurves at different wavelengths.
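To fix ideas, here is a back-of-the-envelope calculation of $\delta$. The planetary and stellar radii below are approximate literature values for HAT-P-1b, quoted only for illustration:

```python
# Approximate literature values for HAT-P-1b (illustrative only):
R_JUP_M = 7.1492e7   # Jupiter equatorial radius [m]
R_SUN_M = 6.957e8    # nominal solar radius [m]

def transit_depth(rp_rjup, rstar_rsun):
    """Transit depth delta = (Rp / R*)^2, dimensionless."""
    return (rp_rjup * R_JUP_M / (rstar_rsun * R_SUN_M))**2

delta = transit_depth(1.32, 1.17)
print(f'delta = {delta:.4f}')  # a roughly 1% flux dip
```

A ~1% flux dip of this kind is what the lightcurve fits below will measure, with wavelength-dependent variations of order tens to hundreds of parts per million on top of it.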
Simulated data for NIRISS/SOSS observations targeting a transit of HAT-P-1b have been generated with the help of the awesimsoss simulator by the NIRISS team. These simulations, in turn, have been calibrated using the JWST pipeline. HAT-P-1b is a Guaranteed Time Observations (GTO) target of the NIRISS Exploration of the Atmospheric diversity of Transiting exoplanets (NEAT) program, which will be observed using the NIRISS/SOSS instrument onboard JWST. This target also features as an example science program in the JWST documentation. If you are not familiar with transiting exoplanet observations and/or with how JWST will observe planetary transits, we encourage you to read through that example science program in order to familiarize yourself with the terminology.
In this notebook, we analyze the 2D spectral products of the JWST pipeline in order to obtain the transmission spectrum of this exoplanet. In particular, the data we take a look at here have been processed by the JWST pipeline up to the assign_wcs step of the pipeline's Stage 2 processing. The simulations didn't include flat-fielding, so we don't apply the flat-field step of the pipeline to them. Although the JWST pipeline will eventually be able to produce white-light lightcurves, as well as extracted 1D spectra, as of the writing of this notebook the JWST calibration pipeline does not have a spectral extraction algorithm that can deal with the complicated structure of NIRISS/SOSS data. In particular, the SUBSTRIP256 subarray contains data from at least two NIRISS/SOSS orders, which overlap at the reddest wavelengths. An algorithm to properly extract these data is in the making, but in this notebook we will be performing our own tracing and extraction of the spectra in order to showcase how to analyze the JWST pipeline products. After extracting the spectra, we will generate the white-light lightcurves of each order, fit them and compare the extracted parameters to the ones in the literature. Then, we will repeat the procedure for the wavelength-dependent lightcurves, from which we will obtain the transmission spectrum of the exoplanet.
Before we begin, let's import some libraries:
# General internal libraries:
import os
import numpy as np
import pickle
import matplotlib.pyplot as plt
from numpy.polynomial import chebyshev
from scipy.ndimage import gaussian_filter1d
from mpl_toolkits.axes_grid1.inset_locator import mark_inset
from scipy.interpolate import interp1d
# Libraries for plotting, reading data:
import seaborn as sns
sns.set_style("ticks") # Set seaborn "ticks" style for better styled plots
from astropy.io import fits
from astropy.utils.data import download_file
# Library for some power-spectral density analysis:
from astropy.timeseries import LombScargle
# Useful library to obtain wavelength map solution from the JWST pipeline:
from jwst import datamodels
# Corner (for posterior distribution plotting):
import corner
# Juliet (for transit fitting & model evaluation):
import juliet
Before we can go ahead and perform spectral extraction of the orders, we need to trace them in order to guide the spectral extraction algorithm on where the spectrum is. Let's first do some data exploration to understand how we might do this with the complicated spectral profile of NIRISS/SOSS.
The products we will be working with are the so-called "rates". These combine the information of all the groups in a given integration into one number, which is the slope of the up-the-ramp samples in that integration (so they have units of counts per second). These calibrated data have already been corrected for various effects present in the raw JWST data, such as bias and dark current. Let's download them (or load them if they are already in the same folder as this notebook), along with some supplementary data we will be using for this notebook --- if downloading, be patient, as this might take a while (the calibrated data alone are 12 GB!):
base_data_url = 'https://data.science.stsci.edu/redirect/JWST/jwst-data_analysis_tools/soss-transit-spectroscopy'
transit_model_path = 'hp1_model.txt'
transit_fits_results_path = 'transit_spectra_results.pkl'
if not os.path.exists('calibrated_data_hatp1_transit.fits'):
    file_path = download_file(base_data_url + '/data/calibrated_data_hatp1_transit.fits')
    os.rename(file_path, 'calibrated_data_hatp1_transit.fits')
if not os.path.exists(transit_model_path):
    file_path = download_file(base_data_url + '/data/hp1_tspec.dat')
    os.rename(file_path, transit_model_path)
if not os.path.exists(transit_fits_results_path):
    file_path = download_file(base_data_url + '/data/transit_spectra_results.pkl')
    os.rename(file_path, transit_fits_results_path)
Let's open the calibrated data, and explore its format and content:
hdul = fits.open('calibrated_data_hatp1_transit.fits')
hdul.info()
Filename: calibrated_data_hatp1_transit.fits
No.  Name         Ver  Type          Cards  Dimensions          Format
  0  PRIMARY        1  PrimaryHDU      152  ()
  1  SCI            1  ImageHDU         17  (2048, 256, 1198)   float64
  2  ERR            1  ImageHDU         11  (2048, 256, 1198)   float32
  3  DQ             1  ImageHDU         12  (2048, 256, 1198)   int32 (rescales to uint32)
  4  INT_TIMES      1  BinTableHDU      23  1198R x 7C          [J, D, D, D, D, D, D]
  5  VAR_POISSON    1  ImageHDU         10  (2048, 256, 1198)   float32
  6  VAR_RNOISE     1  ImageHDU         10  (2048, 256, 1198)   float32
  7  ASDF           1  BinTableHDU      11  1R x 1C             [50439876B]
The data we will mostly be working with are in the SCI and ERR extensions --- the former holds the science data (the "rates") and the latter their errors. There is a lot of extra information in these pipeline products (e.g., the wavelength solution is here, as well as the time-stamps of the observations), but we will get to that in time. As can be seen, the SCI and ERR extensions have three dimensions: the 2048 and 256 dimensions are the spatial ones (i.e., the 2D spectral dimensions). 1198, on the other hand, is the number of integrations of this particular simulated observation. This latter dimension is specific to time-series observations, as here we are interested in using the rates of each integration separately.
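As an aside, the "rates" mentioned above are the slopes of linear fits to the accumulating up-the-ramp counts within each integration. A toy version of that fit is below; all numbers are made up, and the real pipeline ramp fit additionally handles jumps, saturation and optimal weighting:

```python
import numpy as np

# Toy up-the-ramp fit: counts accumulate linearly with time during one integration.
rng = np.random.default_rng(42)
group_time = 5.5                       # assumed seconds per group
ngroups = 10
t = np.arange(ngroups) * group_time
true_rate = 120.0                      # counts/s
ramp = true_rate * t + rng.normal(0., 20., ngroups)

# The "rate" stored in SCI is, conceptually, the slope of this fit:
rate, offset = np.polyfit(t, ramp, 1)
print(f'Fitted rate: {rate:.1f} counts/s (true: {true_rate})')
```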
Let's extract the science data (SCI) and errors (ERR) then:
data = hdul['SCI'].data
errors = hdul['ERR'].data
There are several ways of moving forward to trace the spectra. In theory, once JWST is on the sky, precise trace shapes and positions will be available, and we could use those. Here, however, we will trace each integration individually, in order to check for possible movements/shape changes of the trace during the observations. Before doing this, however, let's generate a "median" image by collapsing all the products in time, so as to have a view of the "average" shape of the traces along the entire exposure:
median_image = np.median(data, axis=0)
Let's plot it:
plt.figure(figsize=(20, 2))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
cb = plt.colorbar()
cb.ax.set_ylabel('Rates (Counts/s)')
Text(0, 0.5, 'Rates (Counts/s)')
This plot tells us a lot about how to proceed with the analysis of this dataset. For example, note that although the pipeline has done a good job at removing structure in the data, there is still some structure that appears as vertical strips. This is a well-known pattern due to the readout of IR detectors, typically referred to as "1/f noise". By eye, it seems some simple column-by-column background subtraction should take care of that. On top of this, the image hints at the care we have to take when tracing. For example, the spectra of orders 1 and 2 start to overlap around pixel ~750. In addition, we can see that the right-hand sides of both traces have the most signal, which decreases towards the left-hand side (redder wavelengths) --- so if we do any kind of iterative tracing, we should probably start on those sides of the traces. With tracing we also have to be careful with the edges of the image: both sides have 4 reference pixels each (i.e., pixels 0 to 3 and 2044 to 2047) which are not sensitive to light, so we have to be careful to leave those out of our analyses. Finally, it is important to note that the order at the bottom of this image (order 2) ends rather abruptly at around column 1750; this is actually an artifact of the simulations, and not a real feature of the NIRISS/SOSS traces.
Let's use all of the above to our advantage to trace the spectra of each integration. First, let's write a small script to get the centroids of each column for each order in each integration. Here we'll use a very simple algorithm in which we convolve each column with a Gaussian filter (in order to smooth the profile, which is rather complex), and then compute the flux-weighted centroid of that filtered column. We repeat this procedure for each of the columns of the image in order to obtain the centroids of the traces, and then we fit them with a polynomial:
def trace_spectrum(image, xstart, ystart, profile_radius=30, gauss_filter_width=10, xend=None):
    """
    Function that non-parametrically traces NIRISS/SOSS spectra. First, to get the centroid at xstart and
    ystart, it convolves the spatial profile with a Gaussian filter, finding its peak through usual flux-weighted
    centroiding. Next, this centroid is used to find the one to its left through the same algorithm.

    Parameters
    ----------
    image: ndarray
        The image to be traced.
    xstart: float
        The x-position (column) at which the tracing algorithm will be started.
    ystart: float
        The estimated y-position (row) of the center of the trace. An estimate within 10-20 pixels is enough.
    profile_radius: float
        Expected radius of the profile measured from its center. Only this region will be used to estimate
        the centroids of the spectrum.
    gauss_filter_width: float
        Width of the Gaussian filter used to perform the centroiding of the first column.
    xend: int
        x-position at which tracing ends. If None, trace all the columns left of xstart.
    """
    # Define x-axis:
    if xend is not None:
        x = np.arange(xend, xstart)
    else:
        x = np.arange(0, xstart)
    # Define y-axis:
    y = np.arange(image.shape[0])
    # Define array that will save centroids at each x:
    ycentroids = np.zeros(len(x))
    for i in range(len(x))[::-1]:
        xcurrent = x[i]
        # Convolve column with a Gaussian filter; remove median before convolving:
        filtered_column = gaussian_filter1d(image[:, xcurrent] - np.median(image[:, xcurrent]), gauss_filter_width)
        # Find centroid within profile_radius pixels of the initial guess:
        idx = np.where(np.abs(y - ystart) < profile_radius)[0]
        ycentroids[i] = np.sum(y[idx] * filtered_column[idx]) / np.sum(filtered_column[idx])
        ystart = ycentroids[i]
    return x, ycentroids
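Before applying it to real data, the kernel of this algorithm --- a Gaussian-smoothed, flux-weighted centroid of a single column --- can be sanity-checked on a synthetic profile. The double-peaked shape below is made up to loosely mimic the SOSS spatial profile:

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

# Synthetic column: two Gaussian peaks straddling a "true" center at row 70.
y = np.arange(256)
true_center = 70.
column = np.exp(-0.5 * ((y - (true_center - 6.)) / 4.)**2) + \
         np.exp(-0.5 * ((y - (true_center + 6.)) / 4.)**2)

# Smooth so the two peaks merge into one, then take the flux-weighted centroid
# within profile_radius = 30 pixels of an initial guess that is off by 5 pixels:
filtered = gaussian_filter1d(column - np.median(column), 10)
idx = np.abs(y - 65.) < 30
centroid = np.sum(y[idx] * filtered[idx]) / np.sum(filtered[idx])
print(f'Recovered centroid: {centroid:.2f} (true: {true_center})')
```

Even with a double-peaked profile and a biased initial guess, the recovered centroid lands within a pixel of the true center, which is the behavior the iterative column-by-column tracing relies on.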
Let's try our algorithm first for order 1 (the one on the top). Let's start it on column 2043 (as columns larger than that have reference pixels, so no signal) which, by eye, has the center of the profile around pixel 70 in the median image. Let's do it for all integrations, and plot all of them on top of the median image to visually see our results:
# Define variables for the tracing algorithm:
nintegrations = data.shape[0]
# Choose where the algorithm starts and ends so we don't include reference pixels:
xstart = 2043
xend = 4
ystart = 70
# Prepare arrays that will save our results:
X1 = np.zeros([nintegrations, xstart-xend])
Y1 = np.zeros([nintegrations, xstart-xend])
# Prepare median image plot, on top of which we'll show the traces:
plt.figure(figsize=(17, 12))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
# Iterate tracing through all integrations:
for i in range(nintegrations):
    X1[i,:], Y1[i,:] = trace_spectrum(data[i,:,:], xstart, ystart, xend=xend)
    plt.plot(X1[i,:], Y1[i,:], color='cornflowerblue', lw=3, alpha=0.1)
Woah, pretty good for such a simple approach! There are evidently some parts of the traces that go a bit astray. In particular, the trace wanders off at columns below around 30. Let's fit a Chebyshev polynomial to each trace so we can smooth the shape a little bit, being careful not to use the pixels that throw the trace off in the fit. We could do some iterative fitting but, as we will see below, this is not terribly important. To select the best order for this polynomial, let's write a small function that does model selection using the Bayesian Information Criterion (BIC) for each trace, and then select the best order via "majority vote":
def select_cheby_order(x, y, min_order, max_order):
    """
    This function selects (and returns) the optimal order of a Chebyshev polynomial using the BIC.

    Parameters
    ----------
    x: ndarray
        Array with the regressors
    y: ndarray
        Array with the data
    min_order: int
        Minimum order to try
    max_order: int
        Maximum order to try
    """
    orders = np.arange(min_order, max_order)
    bics = np.zeros(len(orders))
    # Fit only non-nans:
    idx = np.where(~np.isnan(y))[0]
    n = len(idx)
    for i in range(len(orders)):
        order = orders[i]
        coeffs = chebyshev.chebfit(x[idx], y[idx], deg=order)
        # Residual sum of squares, evaluated on the same (non-nan) points used in the fit:
        RSS = np.sum((y[idx] - chebyshev.chebval(x[idx], coeffs))**2)
        bics[i] = n * np.log(RSS / n) + (order + 1) * np.log(n)
    # Return the order that minimizes the BIC:
    idx = np.where(np.min(bics) == bics)[0]
    return orders[idx][0]
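A quick, self-contained check of the BIC criterion used above: for data generated from a cubic plus noise, intermediate orders should beat both underfitting and overfitting ones. The generating polynomial and noise level below are arbitrary:

```python
import numpy as np
from numpy.polynomial import chebyshev

rng = np.random.default_rng(42)
x = np.linspace(-1., 1., 200)
y = 2. - x + 0.5 * x**3 + rng.normal(0., 0.01, len(x))

n = len(x)
bic = {}
for order in [1, 3, 10, 24]:
    coeffs = chebyshev.chebfit(x, y, deg=order)
    rss = np.sum((y - chebyshev.chebval(x, coeffs))**2)
    # BIC = n ln(RSS/n) + k ln(n), with k = order + 1 free parameters:
    bic[order] = n * np.log(rss / n) + (order + 1) * np.log(n)
    print(f'order {order:2d}: BIC = {bic[order]:8.1f}')
```

Order 1 pays a large residual penalty, while order 24 pays the `k ln(n)` complexity penalty without reducing the residuals, so order 3 wins, as expected.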
# Try orders from 1 to 30 in the polynomial for all the traces:
orders = np.zeros(nintegrations)
for i in range(nintegrations):
    orders[i] = select_cheby_order(X1[i,30:], Y1[i,30:], 1, 30)
# Select best order:
order = int(np.median(orders))
print('Best order was: ', order)
# Use the best one as deemed by the BIC to fit all the traces; plot them on top of the median image:
plt.figure(figsize=(17, 12))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
coeffs1 = np.zeros([nintegrations, order+1])
for i in range(nintegrations):
    # Fit only non-nans:
    idx = np.where(~np.isnan(Y1[i,30:]))
    coeffs1[i,:] = chebyshev.chebfit(X1[i,30:][idx], Y1[i,30:][idx], deg=order)
    plt.plot(X1[i,30:], chebyshev.chebval(X1[i,30:], coeffs1[i,:]), lw=3)
Best order was: 24
That's pretty good!
Now, we can see that the same procedure is bound to fail for order 2 (the lower trace) at columns below around 1000, because there this order quickly starts to overlap with order 1. We will help our algorithm a little bit, then, by masking the upper trace and only fitting the trace for columns above 1000. To this end, we will use the same algorithm for obtaining the centroids of the trace, but will pass it a masked image containing only the lower trace. To do this, we will multiply all the values in rows below 80 (i.e., the upper pixels in the plot above) by zero.
We'll start our algorithm in column 1839 (the edge of the trace here), which has its centroid, by eye, around row 225. We also make sure to stop tracing for pixels smaller than 1000. Let's see how we do:
mask = np.zeros(median_image.shape)
mask[80:,:] = np.ones(mask[80:,:].shape)
xstart = 1839
ystart = 225
xend = 1000
# Prepare arrays that will save our results for order 2:
X2 = np.zeros([nintegrations, xstart-xend])
Y2 = np.zeros([nintegrations, xstart-xend])
# Prepare median image plot, on top of which we'll show the traces:
plt.figure(figsize=(17, 12))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
# Iterate tracing through all integrations:
for i in range(nintegrations):
    X2[i,:], Y2[i,:] = trace_spectrum(data[i,:,:]*mask, xstart, ystart, xend=xend)
    plt.plot(X2[i,:], Y2[i,:], color='cornflowerblue', lw=3, alpha=0.1)
Pretty good! Let's smooth the trace, as done for Order 1, with a Chebyshev polynomial. Let's repeat the same procedure:
# Try orders from 1 to 30 in the polynomial for all the traces:
orders = np.zeros(nintegrations)
for i in range(nintegrations):
    orders[i] = select_cheby_order(X2[i,:], Y2[i,:], 1, 30)
# Select best order:
order = int(np.median(orders))
print('Best order was: ', order)
# Use the best one as deemed by the BIC to fit all the traces; plot them on top of the median image:
plt.figure(figsize=(17, 12))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
coeffs2 = np.zeros([nintegrations, order+1])
for i in range(nintegrations):
    # Fit only non-nans:
    idx = np.where(~np.isnan(Y2[i,:]))
    coeffs2[i,:] = chebyshev.chebfit(X2[i,:][idx], Y2[i,:][idx], deg=order)
    plt.plot(X2[i,:], chebyshev.chebval(X2[i,:], coeffs2[i,:]), lw=3)
Best order was: 15
This looks great! Let's do a final plot showing the traces corresponding to the first integration on the median image:
plt.figure(figsize=(17, 12))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
# Note we use index 0 throughout, so we plot the polynomials of the first integration:
plt.plot(X1[0,:], chebyshev.chebval(X1[0,:], coeffs1[0,:]), lw=3, label='Order 1 trace', color='orangered')
plt.plot(X2[0,:], chebyshev.chebval(X2[0,:], coeffs2[0,:]), lw=3, label='Order 2 trace', color='cornflowerblue')
plt.legend()
<matplotlib.legend.Legend at 0x7f0c9beb3690>
That's pretty good for such a simple procedure!
How does the trace position change as a function of integration? Was our integration-by-integration tracing really worthwhile? Let's map this out using the location of the center of the trace as a proxy for the trace shift of each order; let's use the same (uncontaminated) column for both orders:
# Column 1500 seems pretty uncontaminated
x1_median, y1_shift = 1500, np.zeros(nintegrations)
x2_median, y2_shift = 1500, np.zeros(nintegrations)
for i in range(nintegrations):
    y1_shift[i] = chebyshev.chebval(x1_median, coeffs1[i,:])
    y2_shift[i] = chebyshev.chebval(x2_median, coeffs2[i,:])
plt.figure(figsize=(14, 3))
shift1 = y1_shift-np.median(y1_shift)
shift2 = y2_shift-np.median(y2_shift)
sigma_y1 = np.sqrt(np.var(shift1))
sigma_y2 = np.sqrt(np.var(shift2))
plt.plot(np.arange(nintegrations)+1, shift1, color='orangered',
label=f'Order 1 shifts (RMS={round(sigma_y1, 3)} px)')
plt.plot(np.arange(nintegrations)+1, shift2, color='cornflowerblue',
label=f'Order 2 shifts (RMS={round(sigma_y2, 3)} px)')
plt.legend()
plt.xlabel('Integration number')
plt.ylabel('Trace shift (pixel)')
plt.xlim([1, nintegrations])
(1.0, 1198.0)
It appears that the impact of integration-by-integration tracing is, for these simulations, indeed very small: the pixel shifts of the traces are about 1/500 to 1/1000 of a pixel. Consequently, at least for the simulations analyzed here, we presume these shifts will not have a big impact on the retrieved lightcurves and transmission spectrum.
With our traces at hand, in theory we can now perform simple extraction of the spectra on each integration. Before doing that, however, let's correct our images for the 1/f-noise patterns on a column-by-column basis, using the fact that we now know where the spectra are located, so we can mask the traces out of this procedure.
To this end, for each image we will mask all the pixels in a 30-pixel radius around the traces (more or less the radius of the actual portion of the trace that contains flux from the target), and use the remaining pixels to track the column-to-column variations. Let's apply these corrections to the data:
# We will save the corrected data in a new array, so as to keep track of the original dataset:
corrected_data = np.copy(data)
radius = 30
for i in range(nintegrations):
    for j in range(data.shape[2]):
        # Create mask that will turn to zero values not to be used for background estimation:
        mask = np.ones(data.shape[1])
        if j in X1[i,:]:
            y1 = int(chebyshev.chebval(j, coeffs1[i,:]))
            mask[y1 - radius : y1 + radius] = 0.
        if j in X2[i,:]:
            y2 = int(chebyshev.chebval(j, coeffs2[i,:]))
            mask[y2 - radius : y2 + radius] = 0.
        # Use only pixels that are not zero to calculate the background through a median:
        idx = np.where(mask != 0)[0]
        corrected_data[i,:,j] = corrected_data[i,:,j] - np.median(corrected_data[i,idx,j])
All right, let's now see how we did --- to this end, let's check the corrections made on the first integration:
plt.figure(figsize=(17, 12))
plt.title('Non-corrected image')
im = plt.imshow(data[0,:,:])
im.set_clim(-5, 10)
plt.figure(figsize=(17, 12))
plt.title('Corrected image')
im = plt.imshow(corrected_data[0,:,:])
im.set_clim(-5, 10)
Much better! The strips observed in the original image have been corrected quite successfully thanks to our procedure.
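The gain from this kind of correction can be illustrated on a tiny synthetic frame with an injected per-column offset mimicking the striping. All sizes and amplitudes below are made up:

```python
import numpy as np

rng = np.random.default_rng(0)
nrows, ncols = 64, 32
# White noise plus a different additive offset per column ("1/f"-like stripes):
frame = rng.normal(0., 1., (nrows, ncols)) + rng.normal(0., 5., ncols)[None, :]

# Column-by-column median subtraction (here there is no trace to mask out):
corrected = frame - np.median(frame, axis=0)[None, :]

before = np.std(np.mean(frame, axis=0))
after = np.std(np.mean(corrected, axis=0))
print(f'Column-to-column scatter: {before:.2f} -> {after:.2f}')
```

The median (rather than the mean) is used here for the same reason as in the cell above: it is robust to a few bright pixels that might survive the trace mask.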
Let's now move forward and write a small script that does simple aperture extraction given a set of x and y coordinates that follow the trace, and loop it through all of our integrations. In theory, before doing this we would take care of bad pixels/cosmic rays, but we don't worry about this in this notebook because (a) cosmic rays were not included in the simulations made to create this dataset and (b) bad pixels are the same in each image, so they don't impact any time-varying signal.
First, the aperture extraction function:
def aperture_extraction(image, x, y, aperture_radius, background_radius=50, error_image=None, correct_bkg=True):
    """
    This function takes as input two arrays (x, y) that follow the trace,
    and returns the summed flux over the defined aperture radius (and its error, if an error image
    is given as well), subtracting along the way any background between the aperture radius and the
    background radius. The background is calculated by taking the median of the points between the
    aperture_radius and the background_radius.

    Parameters
    ----------
    image: ndarray
        Image from which the spectrum is to be extracted
    x: ndarray
        Array with the x-axis of the trace (i.e., the columns, wavelength direction)
    y: ndarray
        Array with the y-axis of the trace (i.e., rows, spatial direction)
    aperture_radius: float
        Distance from the center of the trace out to which fluxes are summed.
    background_radius: float
        Distance from the center of the trace out to which the background is calculated. The
        background region will be between this radius and the aperture_radius.
    error_image: ndarray
        Image with the errors of each pixel value in the image ndarray above
    correct_bkg: boolean
        If True, apply background correction. If False, omit it.
    """
    # Create array that will save our fluxes:
    flux = np.zeros(len(x))
    if error_image is not None:
        flux_error = np.zeros(len(x))
    max_row = image.shape[0]
    for i in range(len(x)):
        # Cut out the column we'll be working with:
        column = image[:, int(x[i])]
        if error_image is not None:
            variance_column = error_image[:, int(x[i])]**2
        # Define aperture limits, being careful with the image edges:
        left_side_ap = np.max([y[i] - aperture_radius, 0])
        right_side_ap = np.min([max_row, y[i] + aperture_radius])
        # Extract background between the aperture and background radii:
        if correct_bkg:
            left_side_bkg = np.max([y[i] - background_radius, 0])
            right_side_bkg = np.min([max_row, y[i] + background_radius])
            bkg_left = column[int(left_side_bkg) : int(left_side_ap)]
            bkg_right = column[int(right_side_ap) : int(right_side_bkg)]
            bkg = np.median(np.append(bkg_left, bkg_right))
        else:
            bkg = 0.
        # Subtract it from the column (this creates a new array, leaving the input image untouched):
        column = column - bkg
        # Perform aperture extraction of the background-subtracted column, being careful with pixelization
        # at the edges. First, deal with the left side:
        l_decimal, l_integer = np.modf(left_side_ap)
        l_integer = int(l_integer)
        if l_decimal < 0.5:
            l_fraction = (0.5 - l_decimal) * column[l_integer]
            l_limit = l_integer + 1
            if error_image is not None:
                l_fraction_variance = ((0.5 - l_decimal)**2) * variance_column[l_integer]
        else:
            l_fraction = (1. - (l_decimal - 0.5)) * column[l_integer + 1]
            l_limit = l_integer + 2
            if error_image is not None:
                l_fraction_variance = ((1. - (l_decimal - 0.5))**2) * variance_column[l_integer + 1]
        # Now the right side:
        r_decimal, r_integer = np.modf(right_side_ap)
        r_integer = int(r_integer)
        if r_decimal < 0.5:
            r_fraction = (1. - (0.5 - r_decimal)) * column[r_integer]
            r_limit = r_integer
            if error_image is not None:
                r_fraction_variance = ((1. - (0.5 - r_decimal))**2) * variance_column[r_integer]
        else:
            r_fraction = (r_decimal - 0.5) * column[r_integer + 1]
            r_limit = r_integer + 1
            if error_image is not None:
                r_fraction_variance = ((r_decimal - 0.5)**2) * variance_column[r_integer + 1]
        # Save the total flux in the current column:
        flux[i] = l_fraction + r_fraction + np.sum(column[l_limit:r_limit])
        if error_image is not None:
            # For the flux error, omit the edge values (their contribution to the total variance is small):
            flux_error[i] = np.sqrt(np.sum(variance_column[l_limit:r_limit]) + l_fraction_variance +
                                    r_fraction_variance)
    if error_image is not None:
        return flux, flux_error
    else:
        return flux
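The trickiest part of the function above is the fractional-pixel handling at the aperture edges. The idea can be checked with a simplified, self-contained boxcar sum in which each pixel i covers the interval [i - 0.5, i + 0.5]; this is a stand-alone sketch of the concept, not the exact edge convention used above:

```python
import numpy as np

def boxcar_sum(column, center, radius):
    """Sum `column` over [center - radius, center + radius], weighting the
    edge pixels by their fractional overlap with the aperture."""
    left, right = center - radius, center + radius
    total = 0.
    for i in range(len(column)):
        # Fractional overlap of pixel i (covering [i - 0.5, i + 0.5]) with the aperture:
        overlap = max(0., min(i + 0.5, right) - max(i - 0.5, left))
        total += overlap * column[i]
    return total

# For a uniform column of 1 count/pixel, the extracted flux must equal the
# aperture width (2 * radius), whatever the fractional edge positions:
flux = boxcar_sum(np.ones(100), center=50.2, radius=10.3)
print(f'Extracted flux: {flux:.3f} (aperture width: 20.6)')
```

Without the fractional edge weighting, the summed flux would jump by a full pixel's worth of counts whenever the trace drifts across a pixel boundary, injecting spurious structure into the lightcurves.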
Let us now extract the spectra. To this end, we have to define an aperture radius for the extraction --- we decide to use a 30-pixel aperture radius, and a 50-pixel background radius. As shown in the following plot, these regions cover both the spectra and also a big chunk of the background region(s):
plt.figure(figsize=(17, 12))
im = plt.imshow(median_image)
im.set_clim(-5, 20)
plt.plot(X1[0,:], chebyshev.chebval(X1[0,:], coeffs1[0,:]), lw=3, label='Order 1 trace', color='orangered')
plt.plot(X2[0,:], chebyshev.chebval(X2[0,:], coeffs2[0,:]), lw=3, label='Order 2 trace', color='cornflowerblue')
# Shade the 30-pixel aperture around each trace, matching each trace's color:
plt.fill_between(X1[0,:], chebyshev.chebval(X1[0,:], coeffs1[0,:]) + 30, chebyshev.chebval(X1[0,:], coeffs1[0,:]) - 30,
                 color='orangered', alpha=0.9)
plt.fill_between(X2[0,:], chebyshev.chebval(X2[0,:], coeffs2[0,:]) + 30, chebyshev.chebval(X2[0,:], coeffs2[0,:]) - 30,
                 color='cornflowerblue', alpha=0.9)
plt.ylim(data.shape[1], 0)
plt.legend()
<matplotlib.legend.Legend at 0x7f0c9be6b5d0>
Now let's loop over all the integrations, extract the spectra of both orders, and save that to some dictionaries. We note that this will take a while (around 10 minutes). We will save both the spectra and the columns, so we can later relate the latter to wavelength-space:
# Extraction parameters:
extraction_aperture = 30
background_aperture = 50
# Create dictionary:
spectra = {}
# Generate sub-dictionaries for each order:
spectra['order1'], spectra['order2'] = {}, {}
# Save the X positions for both orders. X positions are the same for all integrations, so
# we save the ones corresponding to the first integration:
spectra['order1']['x'], spectra['order2']['x'] = X1[0,:], X2[0,:]
# Create sub-dictionaries that will save the fluxes and the errors on those fluxes:
spectra['order1']['flux'], spectra['order2']['flux'] = np.zeros([data.shape[0], len(X1[0,:])]),\
                                                       np.zeros([data.shape[0], len(X2[0,:])])
spectra['order1']['flux_errors'], spectra['order2']['flux_errors'] = np.zeros([data.shape[0], len(X1[0,:])]),\
                                                                     np.zeros([data.shape[0], len(X2[0,:])])
# Now iterate through all integrations:
for i in range(nintegrations):
    # Trace order 1:
    y1 = chebyshev.chebval(X1[0,:], coeffs1[i,:])
    # Extract order 1:
    spectra['order1']['flux'][i,:], spectra['order1']['flux_errors'][i,:] = \
        aperture_extraction(corrected_data[i,:,:], X1[0,:], y1,
                            extraction_aperture,
                            error_image=errors[i,:,:],
                            correct_bkg=False)
    # Same for Order 2:
    y2 = chebyshev.chebval(X2[0,:], coeffs2[i,:])
    spectra['order2']['flux'][i,:], spectra['order2']['flux_errors'][i,:] = \
        aperture_extraction(corrected_data[i,:,:], X2[0,:], y2,
                            extraction_aperture,
                            error_image=errors[i,:,:],
                            correct_bkg=False)
Finally, let's plot the spectra of the first integration along with the errorbars:
i = 0
fig, ax = plt.subplots(figsize=(15, 5))
ax.set_title('Order 1 spectrum for the first integration')
ax.errorbar(spectra['order1']['x'], spectra['order1']['flux'][i,:], \
yerr=spectra['order1']['flux_errors'][i,:], color='cornflowerblue')
ax.set_xlim(np.min(spectra['order1']['x']), np.max(spectra['order1']['x']))
ax.set_xlabel('Pixel column')
ax.set_ylabel('Counts/s')
plt.figure(figsize=(15, 5))
plt.title('Order 2 spectrum for the first integration')
plt.errorbar(spectra['order2']['x'], spectra['order2']['flux'][i,:], \
yerr=spectra['order2']['flux_errors'][i,:], color='orangered')
plt.xlabel('Pixel column')
plt.ylabel('Counts/s')
Text(0, 0.5, 'Counts/s')
This is pretty good! It is interesting to see how we can actually identify where the contamination from the second order kicks in in the Order 1 spectra (blue) around pixel ~750. This is actually evident from the images above, and something to keep in mind when performing analyses with this data.
Note: the flattening of the spectra for pixels above ~1700 in order 2 is a bug from the simulator.
Before continuing to the next step, we need to extract two extra data products that will become useful in our analyses in this notebook: (a) the time-stamps of each integration and (b) the wavelength solution corresponding to each pixel in the frame for Order 1 and Order 2.
The first is the easiest to extract --- these are typically stored in the very same pipeline data products we are using here. In particular, recall that each integration is composed of various groups which sample up the ramp; these have, in turn, been combined into the rates that we see here. Perhaps the most useful time stamp for those ramps, thus, is the middle of an integration --- these can be extracted from the products as follows:
spectra['times'] = hdul['INT_TIMES'].data['int_mid_BJD_TDB']
As for the wavelength solution, there is a particular step in the JWST calibration pipeline that "attaches" this data to the products --- these have been attached to our dataset as well. To extract them, one has to re-open the pipeline products with a so-called datamodel from the JWST pipeline, that lets you gain access to this information:
exposure = datamodels.SpecModel('calibrated_data_hatp1_transit.fits')
# Get number of rows and columns of the first integration:
rows, columns = data[0,:,:].shape
# Prepare map that will save the wavelength corresponding to each pixel in the frame:
wmap = np.zeros([2, rows, columns])
# Save it:
for order in [1, 2]:
    for row in range(rows):
        for column in range(columns):
            wmap[order - 1, row, column] = exposure.meta.wcs(column, row, order)[-1]
Let's visualize this 2D wavelength solution for Order 1 and 2:
plt.figure(figsize=(20, 2))
plt.title('Order 1 wavelength map')
im1 = plt.imshow(wmap[0,:,:])
im1.set_clim(0.6, 3.0)
im1.set_cmap('Reds')
cb1 = plt.colorbar()
plt.figure(figsize=(20, 2))
plt.title('Order 2 wavelength map')
im2 = plt.imshow(wmap[1,:,:])
im2.set_clim(0.6, 1.5)
im2.set_cmap('Blues')
cb2 = plt.colorbar()
One important caveat to have in mind is that the wavelength maps are not strictly vertical, i.e., wavelengths do not align perfectly with the columns of the image for NIRISS/SOSS observations. An easy way to see this is to have an image showing "iso-wavelength" bands --- identify pixels that have the same wavelengths and "paint" them in a plot. Let's do this for Order 1:
plt.figure(figsize=(20, 25))
Z = np.zeros(wmap[0,:,:].shape)
for w in np.linspace(0.5, 3., 100):
    # Paint all pixels within a narrow (0.005 micron) band around each wavelength:
    wmin, wmax = w, w + 0.005
    idx = (wmap[0,:,:] > wmin) & (wmap[0,:,:] < wmax)
    Z[idx] = 1.
plt.imshow(Z, interpolation='none')
<matplotlib.image.AxesImage at 0x7f0c9c89bb10>
As can be seen from these maps, this is not extremely critical for NIRISS/SOSS (iso-wavelength bands span ~3 pixels), but it might be important for precise, higher-resolution work to take this into account in the extraction. For our application here, however, we simply take the average wavelength value per column to associate wavelengths with pixels for each order, and we save that to our dictionary:
avg_waves = np.mean(wmap, axis=1)
spectra['order1']['w'], spectra['order2']['w'] = avg_waves[0,spectra['order1']['x'].astype('int')],\
avg_waves[1,spectra['order2']['x'].astype('int')]
Let's have a final look at the extracted spectra of the first 100 integrations, but now with these wavelengths as x-axis instead of the pixel columns:
plt.figure(figsize=(15, 5))
for i in range(100):
    plt.plot(spectra['order1']['w'], spectra['order1']['flux'][i,:], color='cornflowerblue', alpha=0.5)
    plt.plot(spectra['order2']['w'], spectra['order2']['flux'][i,:], color='orangered', alpha=0.5)
plt.xlabel('Wavelength (um)')
plt.xlim([0.65, 2.83])
plt.ylabel('Counts/s')
That looks pretty good! Let's now dive into analyzing the lightcurves that can be extracted from these spectra in the next sections.
Having extracted our spectra, we are ready to jump into the "fun" part of this notebook: the analysis of actual simulated NIRISS/SOSS transit lightcurves. Let's first create the white-light lightcurves of both Order 1 and Order 2 by summing the spectra extracted in the previous section, but only over regions where we know there will be no contamination from overlapping orders. This step is important because the white-light lightcurves help determine the most precise transit parameters for the wavelength-dependent lightcurves, and thus we want estimates that are as unbiased as possible.
From the figures above, it seems that for Order 1 pixels below around 300 and above 1300 have virtually no contamination from Order 2. For Order 2, fluxes from pixels above around 1300 also have virtually no contamination from Order 1. Let's generate the corresponding lightcurves taking that into account:
# Extract order 1 length, create array that will save white-light lightcurve (and errors):
NT1 = spectra['order1']['flux'].shape[0]
lc_order1 = np.zeros(NT1)
lc_errors_order1 = np.zeros(NT1)
# Same for order 2:
NT2 = spectra['order2']['flux'].shape[0]
lc_order2 = np.zeros(NT2)
lc_errors_order2 = np.zeros(NT2)
# Indexes of uncontaminated spectra for Order 1 and 2:
idx_uncontaminated1 = np.where((spectra['order1']['x'] < 300)|(spectra['order1']['x'] > 1300))[0]
idx_uncontaminated2 = np.where(spectra['order2']['x'] > 1300)[0]
# Sum the fluxes and errors for each order. First, order 1:
for i in range(NT1):
    lc_order1[i] = np.sum(spectra['order1']['flux'][i, idx_uncontaminated1])
    lc_errors_order1[i] = np.sqrt(np.sum(spectra['order1']['flux_errors'][i, idx_uncontaminated1]**2))
# Now order 2:
for i in range(NT2):
    lc_order2[i] = np.sum(spectra['order2']['flux'][i, idx_uncontaminated2])
    lc_errors_order2[i] = np.sqrt(np.sum(spectra['order2']['flux_errors'][i, idx_uncontaminated2]**2))
# Save median-normalized lightcurves and errors:
median_lc1, median_lc2 = np.median(lc_order1), np.median(lc_order2)
spectra['order1']['white-light'] = lc_order1 / median_lc1
spectra['order1']['white-light_errors'] = lc_errors_order1 / median_lc1
spectra['order2']['white-light'] = lc_order2 / median_lc2
spectra['order2']['white-light_errors'] = lc_errors_order2 / median_lc2
# Write median errors in ppm:
med_err_order1 = np.median(spectra['order1']['white-light_errors'])*1e6
med_err_order2 = np.median(spectra['order2']['white-light_errors'])*1e6
# Save variable with times in units of hours since beginning of observation:
thours = (spectra['times'] - spectra['times'][0]) * 24
# Plot lightcurve
fig, ax = plt.subplots(figsize=(15, 5))
ax.set_title('White-light lightcurve of HAT-P-1b NIRISS/SOSS observations')
ax.errorbar(thours, spectra['order1']['white-light'],
yerr=spectra['order1']['white-light_errors'],
color='orangered',
fmt='.',
label=f'Order 1 ($\sigma={round(med_err_order1, 1)}$ ppm)')
ax.errorbar(thours, spectra['order2']['white-light'],
yerr=spectra['order2']['white-light_errors'],
color='cornflowerblue',
fmt='.',
label=f'Order 2 ($\sigma={round(med_err_order2, 1)}$ ppm)')
# Define legend, limits, labels:
ax.legend()
ax.set_xlim(np.min(thours), np.max(thours))
ax.set_xlabel('Time since start of observations (hours)')
ax.set_ylabel('Relative flux')
# Plot inset to see errorbars:
axins = ax.inset_axes([0.04, 0.5, 0.25, 0.3])
x1, x2, y1, y2 = 1.5, 2.0, 0.9998, 1.0004
axins.set_xlim(x1, x2)
axins.set_ylim(y1, y2)
axins.set_xticklabels('')
axins.set_yticklabels('')
axins.errorbar(thours,spectra['order1']['white-light'],
yerr=spectra['order1']['white-light_errors'],
color='orangered',
fmt='.')
axins.errorbar(thours,spectra['order2']['white-light'],
yerr=spectra['order2']['white-light_errors'],
color='cornflowerblue',
fmt='.')
mark_inset(ax, axins, loc1=1, loc2=2, linewidth=0.7, fc="None", ec='k', alpha=0.4, clip_on=True, zorder=3)
Wow! Those look pretty good. Note the difference in the overall lightcurve shape between the orders. Part of it can be attributed to limb-darkening (Order 1 covers the reddest wavelengths, where weaker limb-darkening produces a more "box-shaped" lightcurve than for Order 2) --- but a portion of it could also be explained by the transit depths themselves.
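To build intuition for why redder wavelengths yield more "box-shaped" transits, it helps to compare limb-darkening intensity profiles directly. The sketch below is a standalone illustration only: it uses a quadratic law with made-up coefficients purely for display purposes (our fits below use a square-root law), showing that a "bluer", more strongly darkened profile dims much more towards the limb:

```python
import numpy as np

def quadratic_ld(mu, u1, u2):
    # Quadratic limb-darkening law: I(mu)/I(1) = 1 - u1*(1 - mu) - u2*(1 - mu)**2
    return 1. - u1 * (1. - mu) - u2 * (1. - mu)**2

# mu = cos(theta) is 1 at disk center and 0 at the limb:
mu = np.linspace(0., 1., 100)

# Illustrative (made-up) coefficients; limb darkening is typically
# stronger at bluer wavelengths:
blue_profile = quadratic_ld(mu, 0.6, 0.15)   # stronger, "bluer" profile
red_profile = quadratic_ld(mu, 0.2, 0.10)    # weaker, "redder" profile

# The bluer profile is much dimmer at the limb (mu -> 0), which rounds off
# the transit ingress/egress; the flatter, redder profile yields a more
# box-shaped transit:
print('Intensity at the limb (blue-like): {0:.2f}'.format(blue_profile[0]))
print('Intensity at the limb (red-like): {0:.2f}'.format(red_profile[0]))
```

Since the planet occults regions of different surface brightness as it crosses the disk, a steeper intensity drop towards the limb translates directly into a rounder transit shape.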
Let's now fit those transit lightcurves. We will fit them separately to see what we obtain from each order, and then compare and discuss the results. To perform the transit fitting, users can of course use any tool they want. In this notebook, we will use juliet (http://juliet.readthedocs.io/), a tool that performs lightcurve fitting via nested sampling algorithms, so we can thoroughly explore the parameter space.
We first import juliet and write the priors for the parameters of our fit. We will keep the orbital period $P$ (P_p1) fixed. The parameters we will fit for are: the time-of-transit center $t_0$ (t0_p1); two parameters, $r_1$ (r1_p1) and $r_2$ (r2_p1), that parametrize the planet-to-star radius ratio $R_p/R_*$ and the impact parameter of the orbit $b = (a/R_*)\cos i$ on a unit uniform plane (see Espinoza (2018) for details); two parameters, $q_1$ (q1_SOSS) and $q_2$ (q2_SOSS), that parametrize the limb-darkening through a square-root limb-darkening law (see Kipping (2013)); the stellar density $\rho_*$ (rho); a scaling normalization factor for the out-of-transit flux (mflux_SOSS); and an unknown standard deviation added in quadrature to the errorbars of our data, sigma_w_SOSS. On top of this, we fix some parameters in our fit as well: we don't fit for the eccentricity $e$ (ecc_p1) or the argument of periastron passage $\omega$ (omega_p1), as a single transit does not hold much information on these parameters. Similarly, we fix any possible dilution of the transit lightcurve by background sources (mdilution_SOSS) to 1, i.e., we assume no dilution. We take very wide priors for all the free parameters except for the period, which we assume is well-known:
# Name of the parameters to be fit:
params = ['P_p1', 't0_p1', 'r1_p1', 'r2_p1', 'q1_SOSS', 'q2_SOSS', 'ecc_p1', 'omega_p1',
'rho', 'mdilution_SOSS', 'mflux_SOSS', 'sigma_w_SOSS']
# Distributions:
dists = ['fixed', 'normal', 'uniform', 'uniform', 'uniform', 'uniform', 'fixed', 'fixed',
'loguniform', 'fixed', 'normal', 'loguniform']
# Hyperparameters
hyperps = [4.4652998, [2459775.89,0.1], [0.,1], [0.,1.], [0., 1.], [0., 1.], 0.0, 90.,
[100., 10000.], 1.0, [0.,0.1], [0.1, 1000.]]
priors = juliet.generate_priors(params, dists, hyperps)
Having defined the parameters, priors and the hyperparameters of those priors, we now fit both datasets. For this, we pass the data in a juliet-friendly format, and fit each transit individually:
for order in ['order1', 'order2']:
    # Define times, fluxes and errors in a juliet-friendly format:
    times, fluxes, fluxes_error = {}, {}, {}
    times['SOSS'], fluxes['SOSS'], fluxes_error['SOSS'] = (spectra['times'],
                                                           spectra[order]['white-light'],
                                                           spectra[order]['white-light_errors'])
    # Load and fit dataset with juliet (save results to order*_juliet_results):
    spectra[order]['dataset'] = juliet.load(priors=priors, t_lc=times, y_lc=fluxes,
                                            yerr_lc=fluxes_error, ld_laws='squareroot',
                                            out_folder=order + '_juliet_results')
    spectra[order]['results'] = spectra[order]['dataset'].fit(use_dynesty=True, dynamic=True, dynesty_nthreads=4)
Let's see what the fits look like:
plt.figure(figsize=(15, 5))
for i in [1, 2]:
    plt.subplot(2, 2, i)
    plt.title('Order ' + str(i) + ' (white) lightcurve')
    order = 'order' + str(i)
    # First, extract the estimated additional errorbars from our fits:
    sigma_w = np.median(spectra[order]['results'].posteriors['posterior_samples']['sigma_w_SOSS'])
    spectra[order]['sigma_w'] = sigma_w
    # Extract estimated time-of-transit center:
    t0 = np.median(spectra[order]['results'].posteriors['posterior_samples']['t0_p1'])
    # Normalize times for plotting by this:
    tnorm = (spectra['times'] - t0) * 24
    plt.errorbar(tnorm, spectra[order]['white-light'],
                 yerr=np.sqrt(spectra[order]['white-light_errors']**2 + (sigma_w*1e-6)**2),
                 label='Data')
    # Plot best-fit model on top:
    spectra[order]['model'] = spectra[order]['results'].lc.evaluate('SOSS')
    plt.plot(tnorm, spectra[order]['model'], color='black', label='Model')
    plt.xlim(np.min(tnorm), np.max(tnorm))
    plt.ylabel('Relative flux')
    # Residuals:
    plt.subplot(2, 2, i + 2)
    spectra[order]['residuals'] = spectra[order]['white-light'] - spectra[order]['model']
    plt.errorbar(tnorm, spectra[order]['residuals']*1e6,
                 yerr=np.sqrt(spectra[order]['white-light_errors']**2 + (sigma_w*1e-6)**2)*1e6,
                 fmt='o', alpha=0.3)
    plt.xlim(np.min(tnorm), np.max(tnorm))
    plt.ylim(-350, 350)
    average_total_errorbar = np.median(np.sqrt(spectra[order]['white-light_errors']**2 + (sigma_w*1e-6)**2)*1e6)
    plt.text(1, 220, r'$\sigma_w = {0:.1f}$ ppm'.format(average_total_errorbar), fontsize=13)
    plt.ylabel('Residuals (ppm)')
    plt.xlabel('Time from mid-transit (hours)')
Those look pretty good! The precisions more or less match what is expected from NIRISS/SOSS white-light lightcurves. What about the posterior distribution of the parameters?
# Names of the parameters to show in the corner plot:
names = [r'$t_0 - Med[t_0]$ (s)', r'$R_p/R_*$', r'$b = (a/R_*)\cos i$', r'$\rho_*$', r'$u_1$', r'$u_2$']
# Retrieve posterior distributions of parameters for Order 1:
Theta1 = np.zeros([len(spectra['order1']['results'].posteriors['posterior_samples']['t0_p1']), 6])
Theta1[:,0] = spectra['order1']['results'].posteriors['posterior_samples']['t0_p1']
# Convert r1 and r2 sampling scheme to rp/rs and b:
b_1, rprs_1 = juliet.utils.reverse_bp(spectra['order1']['results'].posteriors['posterior_samples']['r1_p1'],
spectra['order1']['results'].posteriors['posterior_samples']['r2_p1'], 0., 1)
Theta1[:,1], Theta1[:,2] = rprs_1, b_1
# Extract stellar density:
Theta1[:,3] = spectra['order1']['results'].posteriors['posterior_samples']['rho']
# Convert q1 and q2 sampling to u1 and u2:
u1_1, u2_1 = juliet.utils.reverse_ld_coeffs('squareroot',
spectra['order1']['results'].posteriors['posterior_samples']['q1_SOSS'],
spectra['order1']['results'].posteriors['posterior_samples']['q2_SOSS'])
Theta1[:,4], Theta1[:,5] = u1_1, u2_1
# Do the same for Order 2:
Theta2 = np.zeros([len(spectra['order2']['results'].posteriors['posterior_samples']['t0_p1']), 6])
Theta2[:,0] = spectra['order2']['results'].posteriors['posterior_samples']['t0_p1']
# Convert r1 and r2 sampling scheme to rp/rs and b:
b_2, rprs_2 = juliet.utils.reverse_bp(spectra['order2']['results'].posteriors['posterior_samples']['r1_p1'],
spectra['order2']['results'].posteriors['posterior_samples']['r2_p1'], 0., 1)
Theta2[:,1], Theta2[:,2] = rprs_2, b_2
# Extract stellar density:
Theta2[:,3] = spectra['order2']['results'].posteriors['posterior_samples']['rho']
# Convert q1 and q2 sampling to u1 and u2:
u1_2, u2_2 = juliet.utils.reverse_ld_coeffs('squareroot',
spectra['order2']['results'].posteriors['posterior_samples']['q1_SOSS'],
spectra['order2']['results'].posteriors['posterior_samples']['q2_SOSS'])
Theta2[:,4], Theta2[:,5] = u1_2, u2_2
# Plot t0 minus combined median t0 for better plotting:
median_t0 = np.median(np.append(Theta1[:,0], Theta2[:,0]))
Theta1[:,0] = (Theta1[:,0] - median_t0) * 24 * 3600
Theta2[:,0] = (Theta2[:,0] - median_t0) * 24 * 3600
# Corner plot for redder order (Order 1)
figure = corner.corner(Theta1, labels=names, color='orangered')
# Same for order 2:
corner.corner(Theta2, fig=figure, color='cornflowerblue')
plt.show()
All right! It seems that all the posteriors are more or less consistent with each other. However, there is a clear separation in the limb-darkening coefficient plane (despite the average values of the coefficients being consistent) --- this is expected: the limb-darkening profiles should differ between the orders given the different wavelength ranges they encompass.
One might wonder if there is any structure in the residuals. Such structure can give rise to signals that, if not accounted for, might lead us to believe we have a precision much better than what the dataset actually has to offer. A classic approach for a quick check on the residuals is to see if, as you bin more datapoints, their rms decreases with the square root of the number of datapoints. If the residuals behave like Gaussian random noise, one should see a $1/\sqrt{N}$ decline in this plot, where $N$ is the number of datapoints in a given bin. This is an easy check to make in this case:
plt.figure(figsize=(15, 5))
bin_sizes = np.arange(10, 1000, 10)
for i in [1, 2]:
    order = 'order' + str(i)
    plt.subplot(1, 2, i)
    plt.title('Order ' + str(i) + ' residuals as a function of bin-size')
    rms = np.zeros(len(bin_sizes))
    for j in range(len(bin_sizes)):
        bin_size = bin_sizes[j]
        binned_times, binned_residuals, binned_errors = juliet.utils.bin_data(spectra['times'],
                                                                              spectra[order]['residuals'],
                                                                              bin_size)
        rms[j] = np.sqrt(np.var(binned_residuals)) * 1e6
    plt.plot(bin_sizes, rms, label='White-light lightcurve')
    plt.plot(bin_sizes, (np.sqrt(bin_sizes[0]) * rms[0]) / np.sqrt(bin_sizes), label=r'Expected ($1/\sqrt{N}$)')
    plt.xscale('log')
    plt.ylabel('RMS (ppm)')
    plt.xlabel('Bin size')
    plt.legend()
The plots look very promising, but it is hard to tell whether there is any particular frequency we should be paying attention to. What about the power spectrum of the residuals? This transforms the time-series to Fourier space, where we can actually see if there is any evidence for residual signals at different time-scales/frequencies:
plt.figure(figsize=(15, 5))
# Define frequencies in hours. From more or less the duration of the observation (~1 over 10 hours) to the
# time-sampling (~1 over 1 minute, so over 1/60 hours)
frequency = np.linspace(1. / 10., 1. / (1. / 60.), 1000)
for i in [1, 2]:
    order = 'order' + str(i)
    plt.subplot(1, 2, i)
    plt.title('Order ' + str(i) + ' Residuals Power Spectral Density (PSD)')
    ls = LombScargle((spectra['times'] - spectra['times'][0]) * 24, spectra[order]['residuals'])
    psd = ls.power(frequency)
    max_power = np.max(psd)
    max_freq = frequency[np.where(max_power == psd)[0]][0]
    fap = ls.false_alarm_probability(max_power)
    print('Maximum power: {0:.4f}, frequency: {1:.2f}, FAP: {2:.2f}%'.format(max_power, max_freq, fap * 100))
    plt.plot(frequency, psd)
    plt.ylabel('Power Spectral Density')
    plt.xlabel('Frequency (1/hr)')
    plt.xlim([1. / 10., 60.])
Maximum power: 0.0099, frequency: 23.60, FAP: 100.00%
Maximum power: 0.0129, frequency: 13.41, FAP: 97.42%
This looks fairly good! There is no clear peak in these power spectral densities, with power evenly distributed across all frequencies --- i.e., the noise appears to be fairly "white". We thus move ahead with that hypothesis, and assume there is no residual signal and/or correlated noise left to model in our fits.
Among the main products to be analyzed when observing transits with JWST are the wavelength-dependent lightcurves. Fitting those lets us obtain the final product of our observations: the planet's transmission spectrum. To retrieve it, let us fit the wavelength-dependent lightcurves of each order. Unlike with HST and ground-based low-resolution lightcurves, JWST's precision is so impressive in cases like this that we won't need to bin the lightcurves --- we can work at the (extracted) "pixel" level!
First, let's extract those lightcurves, and let's plot them in a slightly different way than above: as a matrix, where the rows are the wavelengths and the columns are the time-stamps. The values of this matrix will be the relative fluxes:
# Let's iterate then to get all the lightcurves:
matrices = {}
ntimes = len(spectra['times'])
for order in ['order1', 'order2']:
    # Create the matrix for each order:
    nwavs = len(spectra[order]['w'])
    matrices[order] = np.zeros([nwavs, ntimes])
    for i in range(nwavs):
        spectra[order]['wbin' + str(i)] = {}
        # Get central wavelength and limits:
        spectra[order]['wbin' + str(i)]['w'] = spectra[order]['w'][i]
        if i == 0:
            dwav_min = np.abs((spectra[order]['w'][i+1] - spectra[order]['w'][i]) / 2.)
            dwav_max = dwav_min
        elif i == nwavs - 1:
            dwav_min = np.abs((spectra[order]['w'][i] - spectra[order]['w'][i-1]) / 2.)
            dwav_max = dwav_min
        else:
            dwav_min = np.abs(spectra[order]['w'][i] - spectra[order]['w'][i-1]) / 2.
            dwav_max = np.abs(spectra[order]['w'][i+1] - spectra[order]['w'][i]) / 2.
        # Extract lightcurve and errors:
        spectra[order]['wbin' + str(i)]['lc'] = spectra[order]['flux'][:,i]
        spectra[order]['wbin' + str(i)]['lc_errors'] = spectra[order]['flux_errors'][:,i]
        # Median-normalize the extracted fluxes:
        median_flux = np.median(spectra[order]['wbin' + str(i)]['lc'][:100])
        spectra[order]['wbin' + str(i)]['lc'] = spectra[order]['wbin' + str(i)]['lc'] / median_flux
        spectra[order]['wbin' + str(i)]['lc_errors'] = spectra[order]['wbin' + str(i)]['lc_errors'] / median_flux
        matrices[order][i,:] = spectra[order]['wbin' + str(i)]['lc']
times_hours = (spectra['times'] - spectra['times'][0]) * 24
# Let's plot them:
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 10))
# First, order 1:
ax1.set_title('Order 1 lightcurves')
im1 = ax1.imshow(matrices['order1'], interpolation='none')
im1.set_clim(0.985, 1.01)
# Y axis:
ticks = np.arange(0, len(spectra['order1']['w']), 250)
ticklabels = ["{:0.2f}".format(spectra['order1']['w'][i]) for i in ticks]
ax1.set_yticks(ticks)
ax1.set_yticklabels(ticklabels)
ax1.set_ylabel(r'Wavelength ($\mu$m)')
# X axis:
ticks = np.arange(0, ntimes, 100)
ticklabels = ["{:0.2f}".format(times_hours[i]) for i in ticks]
ax1.set_xticks(ticks)
ax1.set_xticklabels(ticklabels)
ax1.set_xlabel('Hours since start of observations')
# Repeat for order 2:
ax2.set_title('Order 2 lightcurves')
im2 = ax2.imshow(matrices['order2'],interpolation='none')
im2.set_clim(0.985, 1.01)
# Y axis:
ticks = np.arange(0, len(spectra['order2']['w']), 250)
ticklabels = ["{:0.2f}".format(spectra['order2']['w'][i]) for i in ticks]
ax2.set_yticks(ticks)
ax2.set_yticklabels(ticklabels)
ax2.set_ylabel(r'Wavelength ($\mu$m)')
# X axis:
ticks = np.arange(0, ntimes, 100)
ticklabels = ["{:0.2f}".format(times_hours[i]) for i in ticks]
ax2.set_xticks(ticks)
ax2.set_xticklabels(ticklabels)
ax2.set_xlabel('Hours since start of observations')
fig.colorbar(im1, shrink=0.4, label='Relative flux')
Nice! As can be seen, this plot comes in very handy. First of all, note how the transit shows up as a dark band at all wavelengths where it is expected (i.e., where we saw it in the white-light lightcurves --- a time-of-transit around 3.5 hours after the start of observations). One can also see how the lightcurves get noisier at longer wavelengths, which makes sense given the image plots (the traces fade towards the left of all of our image/trace plots). On top of this, some Order 1 lightcurves at the shortest wavelengths seem to be a bit noisier/corrupted than usual. We won't worry too much about those here, but this is a very important application of this kind of plot, which allows us to spot outlier lightcurves in one simple image.
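A simple quantitative complement to this visual inspection is to flag rows whose scatter is anomalously large. Below is a minimal, self-contained sketch of the idea on a synthetic matrix with the same layout as matrices['order1'] above (the corrupted row index and the 5-sigma threshold are arbitrary choices for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in with the same layout as matrices[order]:
# rows are wavelengths, columns are time-stamps, values are relative fluxes.
nwavs, ntimes = 200, 500
matrix = 1. + rng.normal(0., 1e-3, [nwavs, ntimes])
matrix[57, :] += rng.normal(0., 1e-2, ntimes)   # one artificially corrupted row

# Per-row scatter; in a real application one would restrict this to
# out-of-transit columns (e.g., the first ~100 integrations):
row_scatter = np.std(matrix, axis=1)

# Robust flagging: compare each row's scatter against the median scatter,
# in units of the (scaled) median absolute deviation:
med = np.median(row_scatter)
mad = 1.4826 * np.median(np.abs(row_scatter - med))
outliers = np.where(row_scatter > med + 5. * mad)[0]
print('Flagged rows:', outliers)
```

The median/MAD combination is used instead of the mean/standard deviation so that the corrupted rows themselves do not inflate the threshold used to detect them.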
Let's now fit those lightcurves. To this end, we once again use juliet --- however, we fix the ephemerides and orbital parameters ($P$, $t_0$, $a/R_*$ and $b$) to those found in the white-light analysis, as they should be wavelength-independent. For this, we combine the posterior parameters from Orders 1 and 2. In these fits, then, we only leave the transit depth, the limb-darkening coefficients, the out-of-transit flux and the "jitter" as free parameters.
Let's first combine the orbital parameters using the posterior distributions of our white-light fits:
t0_median = np.median(np.append(spectra['order1']['results'].posteriors['posterior_samples']['t0_p1'],
spectra['order2']['results'].posteriors['posterior_samples']['t0_p1']))
b_median = np.median(np.append(b_1, b_2))
rho_median = np.median(np.append(spectra['order1']['results'].posteriors['posterior_samples']['rho'],
spectra['order2']['results'].posteriors['posterior_samples']['rho']))
print(t0_median, b_median, rho_median)
2459775.8911330667 0.748238818509106 1017.3252150608885
And define the priors of our fit using those:
# Name of the parameters to be fit:
params = ['P_p1', 't0_p1', 'rho', 'b_p1', 'q1_SOSS', 'q2_SOSS', 'ecc_p1', 'omega_p1',
'p_p1', 'mdilution_SOSS', 'mflux_SOSS', 'sigma_w_SOSS']
# Distributions:
dists = ['fixed', 'fixed', 'fixed', 'fixed', 'uniform', 'uniform', 'fixed', 'fixed',
'uniform', 'fixed', 'normal', 'loguniform']
# Hyperparameters:
hyperps = [4.4652997979, t0_median, rho_median, b_median, [0, 1], [0, 1], 0.0, 90.,
[0., 0.2], 1.0, [0., 0.1], [0.1, 1000.]]
priors = juliet.generate_priors(params, dists, hyperps)
With those priors in hand, we can now fit the wavelength-dependent lightcurves. Note that performing the fitting below at NIRISS/SOSS' native resolution can take days (we need almost 3,000 juliet runs). It is therefore important to consider parallelizing these fits in a real application.
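Since each wavelength bin is fit independently, these runs are "embarrassingly parallel". One possible approach, sketched below with Python's standard multiprocessing module, is to map one task per (order, bin) onto a worker pool. The fit_one_bin helper here is hypothetical: in a real application it would wrap the per-bin juliet.load(...).fit() calls shown further below, but in this sketch it just returns a placeholder so the snippet runs standalone:

```python
from multiprocessing import Pool

def fit_one_bin(task):
    # Hypothetical wrapper: in a real application this would build the
    # times/fluxes/errors for spectra[order]['wbin' + str(i)], call
    # juliet.load(...).fit() and return the posterior transit depth.
    # Here it simply returns its task as a placeholder:
    order, i = task
    return (order, i)

if __name__ == '__main__':
    # One task per (order, wavelength bin); the bin counts here are placeholders:
    tasks = [('order1', i) for i in range(10)] + [('order2', i) for i in range(10)]
    with Pool(processes=4) as pool:
        results = pool.map(fit_one_bin, tasks)
    print('Fitted', len(results), 'bins')
```

The important design point is that no bin depends on any other, so the same pattern scales from a few local processes to many jobs distributed across a cluster.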
For simplicity (and speed) in rendering this notebook, we have pickled and uploaded the results of these fits --- these were downloaded in the second cell of this notebook above. By default, thus, this notebook will read these downloaded results directly. If you wish to run the fits on your own, however, you can set the use_downloaded_results variable below to False:
use_downloaded_results = True
If the fits are run, we create a matrix similar to the one above, but now for the residuals of the wavelength-dependent lightcurves. If results were downloaded, these were already generated, so we simply extract them from those products:
# If use_downloaded_results is True, read results from downloaded file. If False, run fits and save results.
residual_matrices = {}
if use_downloaded_results:
    transpec_results = pickle.load(open(transit_fits_results_path, 'rb'))
    for order in ['order1', 'order2']:
        residual_matrices[order] = transpec_results[order]['transmission_spectrum']['residual_matrix']
        spectra[order]['transmission_spectrum'] = {}
        spectra[order]['transmission_spectrum']['wavelengths'] = transpec_results[order]['transmission_spectrum']['wavelengths']
        spectra[order]['transmission_spectrum']['depths'] = transpec_results[order]['transmission_spectrum']['depths']
        spectra[order]['transmission_spectrum']['errors'] = transpec_results[order]['transmission_spectrum']['errors']
else:
    for order in ['order1', 'order2']:
        spectra[order]['transmission_spectrum'] = {}
        spectra[order]['transmission_spectrum']['wavelengths'] = np.array([])
        spectra[order]['transmission_spectrum']['depths'] = np.array([])
        spectra[order]['transmission_spectrum']['errors'] = np.array([])
        nwavs = len(spectra[order]['w'])
        residual_matrices[order] = np.zeros([nwavs, ntimes])
        for i in range(nwavs):
            # Save dataset in juliet format:
            times, fluxes, fluxes_error = {}, {}, {}
            times['SOSS'], fluxes['SOSS'], fluxes_error['SOSS'] = (spectra['times'],
                                                                   spectra[order]['wbin' + str(i)]['lc'],
                                                                   spectra[order]['wbin' + str(i)]['lc_errors'])
            # Load and fit dataset with juliet (save results to order*_wbin_*_juliet_results):
            spectra[order]['dataset'] = juliet.load(priors=priors, t_lc=times, y_lc=fluxes,
                                                    yerr_lc=fluxes_error, ld_laws='squareroot',
                                                    out_folder=order + '_wbin_' + str(i) + '_juliet_results')
            spectra[order]['results'] = spectra[order]['dataset'].fit()
            # Save transmission spectrum for plotting later:
            spectra[order]['transmission_spectrum']['wavelengths'] = np.append(spectra[order]['transmission_spectrum']['wavelengths'],
                                                                               spectra[order]['wbin' + str(i)]['w'])
            depths = ((spectra[order]['results'].posteriors['posterior_samples']['p_p1'])**2) * 1e6
            spectra[order]['transmission_spectrum']['depths'] = np.append(spectra[order]['transmission_spectrum']['depths'],
                                                                          np.median(depths))
            spectra[order]['transmission_spectrum']['errors'] = np.append(spectra[order]['transmission_spectrum']['errors'],
                                                                          np.sqrt(np.var(depths)))
            # Save residuals:
            residual_matrices[order][i,:] = (spectra[order]['wbin' + str(i)]['lc'] -
                                             spectra[order]['results'].lc.evaluate('SOSS')) * 1e6
# Plot fits:
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(15, 10))
# First, order 1:
ax1.set_title('Order 1 lightcurve residuals')
im1 = ax1.imshow(residual_matrices['order1'], interpolation='none')
im1.set_clim(-1000, 1000)
# Y axis:
ticks = np.arange(0, len(spectra['order1']['w']), 250)
ticklabels = ["{:0.2f}".format(spectra['order1']['w'][i]) for i in ticks]
ax1.set_yticks(ticks)
ax1.set_yticklabels(ticklabels)
ax1.set_ylabel(r'Wavelength ($\mu$m)')
# X axis:
ticks = np.arange(0, ntimes, 100)
ticklabels = ["{:0.2f}".format(times_hours[i]) for i in ticks]
ax1.set_xticks(ticks)
ax1.set_xticklabels(ticklabels)
ax1.set_xlabel('Hours since start of observations')
# Repeat for order 2:
ax2.set_title('Order 2 lightcurve residuals')
im2 = ax2.imshow(residual_matrices['order2'], interpolation='none')
im2.set_clim(-1000, 1000)
# Y axis:
ticks = np.arange(0, len(spectra['order2']['w']), 250)
ticklabels = ["{:0.2f}".format(spectra['order2']['w'][i]) for i in ticks]
ax2.set_yticks(ticks)
ax2.set_yticklabels(ticklabels)
ax2.set_ylabel(r'Wavelength ($\mu$m)')
# X axis:
ticks = np.arange(0, ntimes, 100)
ticklabels = ["{:0.2f}".format(times_hours[i]) for i in ticks]
ax2.set_xticks(ticks)
ax2.set_xticklabels(ticklabels)
ax2.set_xlabel('Hours since start of observations')
fig.colorbar(im1, shrink=0.4, label='Residuals (ppm)')
All right! That looks great --- the residual images appear to be just random noise!
Let's now see how our transmission spectrum looks. The model transmission spectrum we compare against is taken from the ExoCTK website (https://exoctk.stsci.edu/generic), with properties matching those of HAT-P-1b. We'll plot the original, unbinned transit depths and also plot a binned version on top:
plt.figure(figsize=(15, 5))
nbin = 21
# Plot order 1 transit spectra:
plt.errorbar(spectra['order1']['transmission_spectrum']['wavelengths'],
spectra['order1']['transmission_spectrum']['depths'],
spectra['order1']['transmission_spectrum']['errors'],
fmt='.', mfc='orangered', mec='orangered', ecolor='orangered',
ms=8, elinewidth=1, alpha=0.05, label=None)
# Bin:
wb1, db1, db1_err = juliet.utils.bin_data(spectra['order1']['transmission_spectrum']['wavelengths'],
spectra['order1']['transmission_spectrum']['depths'],
nbin)
plt.errorbar(wb1, db1, db1_err,
fmt='o', mfc='white',
mec='orangered', ecolor='orangered',
ms=8, elinewidth=1, label='NIRISS/SOSS Order 1')
# Same, order 2:
plt.errorbar(spectra['order2']['transmission_spectrum']['wavelengths'],
spectra['order2']['transmission_spectrum']['depths'],
spectra['order2']['transmission_spectrum']['errors'],
fmt='.', mfc='cornflowerblue', mec='cornflowerblue',
ecolor='cornflowerblue', ms=8, elinewidth=1, alpha=0.05, label = None)
wb2, db2, db2_err = juliet.utils.bin_data(spectra['order2']['transmission_spectrum']['wavelengths'],
spectra['order2']['transmission_spectrum']['depths'],
nbin)
plt.errorbar(wb2, db2, db2_err,
fmt='o', mfc='white',
mec='cornflowerblue', ecolor='cornflowerblue',
ms=8, elinewidth=1, label='NIRISS/SOSS Order 2')
# Plot the transit spectrum model:
wavelength_model, transit_depth_model = np.loadtxt(transit_model_path, unpack=True)
plt.plot(wavelength_model, transit_depth_model*1e6, color='black', lw=1, label='Model spectrum')
# Mark some features in the plot:
plt.text(0.6, 14800, 'Na', fontsize=13)
plt.text(0.78, 14600, 'K', fontsize=13)
plt.text(1.15, 14300, 'H$_2$O', fontsize=13)
plt.text(1.4, 14500, 'H$_2$O', fontsize=13)
plt.text(1.85, 14550, 'H$_2$O', fontsize=13)
plt.text(2.5, 14700, 'H$_2$O', fontsize=13)
# Define plot limits:
plt.xlim(0.54, 2.85)
plt.ylim(14000, 15000)
plt.ylabel('Transit depth (ppm)')
plt.xlabel('Wavelength ($\mu$m)')
plt.legend()
Interesting! There are clear hints of the Na and K features in the blue-optical bandpass of Order 2. The Order 1 transmission spectrum, on the other hand, shows a very clear detection of several water bands across the entire NIRISS/SOSS range. There seem to be some problems at around ~1.75-2 microns, where the contamination overlap between Orders 1 and 2 is largest --- which is expected given the simple extraction routine used above.
Overall, these results showcase the power of NIRISS/SOSS observations for targeting both optical and near-infrared features in the transit spectrum of HAT-P-1b.